Concentrate Attention: Towards Domain-Generalizable Prompt Optimization for Language Models
Chengzhengxu Li, Xiaoming Liu, Zhaohan Zhang, Yichen Wang, Chen Liu, Yu Lan, Chao Shen
Recent advances in prompt optimization have notably enhanced the performance of pre-trained language models (PLMs) on downstream tasks. However, the potential of optimized prompts for domain generalization remains under-explored. To probe the nature of prompt generalization on unseen domains, we conduct pilot experiments and find that (i) prompts receiving more attention weight from PLMs' deep layers are more generalizable, and (ii) prompts with more stable attention distributions in PLMs' deep layers are more generalizable. We therefore propose a new objective for domain-generalizable prompt optimization, named "Concentration", which measures the "lookback" attention from the current decoding token to the prompt tokens: the objective increases the attention strength on prompts and reduces the fluctuation of the attention distribution. We adapt this objective to popular soft-prompt and hard-prompt optimization methods, respectively. Extensive experiments demonstrate that our idea improves the compared prompt optimization methods in accuracy by 1.42% for soft prompt generalization and 2.16% for hard prompt generalization under the multi-source domain generalization setting, while maintaining satisfactory in-domain performance. These promising results validate the effectiveness of our proposed prompt optimization objective and provide key insights into domain-generalizable prompts.
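The abstract's "Concentration" objective combines two quantities: the attention mass that deep-layer decoding tokens place on the prompt (to be increased), and the fluctuation of that mass (to be reduced). A minimal sketch of how such an objective could be computed is shown below; the tensor layout, function name, and the use of variance across decoding steps as the "fluctuation" term are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def concentration_objective(attn, prompt_len, deep_layer_start, lam=1.0):
    """Toy sketch of a Concentration-style objective (assumed layout).

    attn: array of shape (layers, steps, seq_len) holding the "lookback"
          attention from each decoding step's token to all positions;
          each row is assumed to sum to 1.
    prompt_len: number of leading positions occupied by prompt tokens.
    deep_layer_start: index from which layers count as "deep".
    lam: weight trading off strength against fluctuation (assumed).

    Returns a scalar to maximize: mean attention mass on prompt tokens
    in deep layers, minus lam times its variance across decoding steps.
    """
    deep = attn[deep_layer_start:]                # deep layers only
    prompt_mass = deep[..., :prompt_len].sum(-1)  # (deep_layers, steps)
    strength = prompt_mass.mean()                 # attention strength term
    fluctuation = prompt_mass.var(axis=-1).mean() # stability term
    return strength - lam * fluctuation

# Example: uniform attention over 4 positions, 2 of them prompt tokens,
# gives prompt mass 0.5 at every step and zero fluctuation.
attn = np.full((2, 3, 4), 0.25)
print(concentration_objective(attn, prompt_len=2, deep_layer_start=1))  # 0.5
```

In an actual optimization loop this scalar would be added (negated) to the task loss so that gradient updates on soft-prompt embeddings, or search steps over hard prompts, favor prompts that deep layers attend to strongly and stably.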